Gradient descent

Results: 277



31. Batch-Incremental vs. Instance-Incremental Learning in Dynamic and Evolving Data. Jesse Read, Albert Bifet, Bernhard Pfahringer, Geoff Holmes. Department of Signal Theory and Communications

Source URL: users.ics.aalto.fi

Language: English - Date: 2012-10-25 03:54:39

32. Energetic Natural Gradient Descent

Source URL: psthomas.com

Language: English - Date: 2016-05-26 17:39:49

33. Bundle Adjustment and SLAM

Source URL: www.cvg.ethz.ch

Language: English - Date: 2016-04-12 09:40:24

34. CS168: The Modern Algorithmic Toolbox, Lecture #6: Stochastic Gradient Descent and Regularization. Tim Roughgarden and Gregory Valiant. April 13, 2016

Source URL: theory.stanford.edu

Language: English - Date: 2016-06-04 09:49:44
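
The lecture listed above covers stochastic gradient descent together with regularization. As an illustrative sketch only (not taken from the lecture notes), the following Python snippet runs SGD on an L2-regularized least-squares objective; the step size, regularization strength, and synthetic data are arbitrary choices made for the example.

    import numpy as np

    def sgd_ridge(X, y, lam=0.1, lr=0.01, epochs=20, seed=0):
        """SGD on the L2-regularized least-squares objective
        (1/2n) * sum_i (x_i . w - y_i)**2 + (lam/2) * ||w||**2."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(epochs):
            for i in rng.permutation(n):              # one pass in random order
                grad = (X[i] @ w - y[i]) * X[i] + lam * w
                w -= lr * grad                        # single-example update
        return w

    # Toy check: recover a known weight vector from noisy observations.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    w_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
    y = X @ w_true + 0.1 * rng.normal(size=200)
    print(sgd_ridge(X, y))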

35. Nonsymmetric preconditioning for conjugate gradient and steepest descent methods. Bouwmeester, H.; Dougherty, A.; Knyazev, A. Mitsubishi Electric Research Laboratories, http://www.merl.com

Source URL: www.merl.com

Language: English - Date: 2016-05-18 10:00:44

36. Escaping From Saddle Points – Online Stochastic Gradient for Tensor Decomposition. Rong Ge. JMLR: Workshop and Conference Proceedings, vol. 40:1–46, 2015

Source URL: jmlr.org

Language: English - Date: 2015-07-20 20:08:36
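
The paper above studies how stochastic gradient methods escape strict saddle points, where the exact gradient vanishes but the Hessian still has a direction of negative curvature. A toy sketch, not tied to the paper's tensor-decomposition setting: on f(x, y) = x^4 - x^2 + y^2 the origin is a strict saddle, plain gradient descent started there never moves, while the same iteration with a little injected noise drifts into one of the two minima near (±1/√2, 0). The test function, step size, and noise level are invented for illustration.

    import numpy as np

    def grad(w):
        """Gradient of f(x, y) = x**4 - x**2 + y**2, which has a strict saddle
        at the origin and two minima at (+-1/sqrt(2), 0)."""
        x, y = w
        return np.array([4 * x**3 - 2 * x, 2 * y])

    def noisy_gradient_descent(w0, lr=0.05, noise=0.01, steps=500, seed=0):
        """Gradient descent with injected Gaussian noise; the noise is what
        lets the iterate leave the saddle, where the exact gradient is zero."""
        rng = np.random.default_rng(seed)
        w = np.array(w0, dtype=float)
        for _ in range(steps):
            w -= lr * (grad(w) + noise * rng.normal(size=2))
        return w

    print(noisy_gradient_descent([0.0, 0.0]))              # leaves the saddle, lands near a minimum
    print(noisy_gradient_descent([0.0, 0.0], noise=0.0))   # no noise: stays stuck at the origin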

37. Advances in the Minimization of Finite Sums. Mark Schmidt, joint work with Nicolas Le Roux, Francis Bach, Reza Babanezhad, and Mohamed Ahmed. University of British Columbia

Source URL: www.proba.jussieu.fr

Language: English

38. Projected Natural Actor-Critic. Philip S. Thomas, William Dabney, Sridhar Mahadevan, and Stephen Giguere. School of Computer Science, University of Massachusetts Amherst, Amherst, MA 01003

Source URL: psthomas.com

Language: English - Date: 2013-11-10 12:06:12

39. Natural Gradient Works Efficiently in Learning. Shun-ichi Amari. RIKEN Frontier Research Program, Wako-shi, Hirosawa 2-1, Saitama, Japan

Source URL: www.maths.tcd.ie

Language: English - Date: 2009-10-06 06:49:39
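
Amari's natural gradient preconditions the ordinary gradient with the inverse Fisher information matrix, so the update follows the geometry of the model's output distribution rather than the raw parameter coordinates. As a hedged illustration (not code from the paper), the sketch below applies a damped empirical-Fisher natural-gradient update to a toy logistic-regression problem; the damping constant, step size, and data are all made up for the example.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def natural_gradient_step(w, X, y, lr=0.5, damping=1e-3):
        """One natural-gradient update for logistic regression: precondition
        the loss gradient with the (damped) inverse Fisher information matrix."""
        p = sigmoid(X @ w)
        n = len(y)
        grad = X.T @ (p - y) / n                          # gradient of the mean log-loss
        fisher = (X * (p * (1 - p))[:, None]).T @ X / n   # empirical E[p(1-p) x x^T]
        fisher += damping * np.eye(len(w))                # keep the linear solve well-posed
        return w - lr * np.linalg.solve(fisher, grad)

    # Toy data: two Gaussian blobs with labels 0 and 1.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 1.0, size=(100, 2)),
                   rng.normal(1.0, 1.0, size=(100, 2))])
    y = np.concatenate([np.zeros(100), np.ones(100)])
    w = np.zeros(2)
    for _ in range(50):
        w = natural_gradient_step(w, X, y)
    print("learned weights:", w)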

40. Efficient Elastic Net Regularization for Sparse Linear Models. Zachary C. Lipton (arXiv cs.LG, 22 May 2015)

Source URL: zacklipton.com

Language: English - Date: 2015-05-21 21:27:59
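
The last entry concerns elastic-net regularization, a mix of L1 and L2 penalties, for sparse linear models. As an illustrative sketch only, and not the lazy-update scheme the paper itself develops, the snippet below uses proximal stochastic gradient descent: a plain gradient step on the squared loss plus the L2 term, followed by soft-thresholding to apply the L1 term. Penalty strengths, step size, and data are placeholders.

    import numpy as np

    def soft_threshold(w, t):
        """Proximal operator of t * ||w||_1 (element-wise soft-thresholding)."""
        return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

    def prox_sgd_elastic_net(X, y, l1=0.01, l2=0.01, lr=0.01, epochs=30, seed=0):
        """Proximal SGD for least squares with an elastic-net penalty: gradient
        step on the smooth part (loss + L2), then soft-threshold for the L1 part."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(epochs):
            for i in rng.permutation(n):
                grad = (X[i] @ w - y[i]) * X[i] + l2 * w
                w = soft_threshold(w - lr * grad, lr * l1)
        return w

    # Sparse ground truth: only two of ten coefficients are nonzero.
    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 10))
    w_true = np.zeros(10)
    w_true[[0, 3]] = [2.0, -1.5]
    y = X @ w_true + 0.05 * rng.normal(size=300)
    print(np.round(prox_sgd_elastic_net(X, y), 2))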